Compiling Parallel Sparse Code for User-Defined Data Structures

Authors

  • Vladimir Kotlyar
  • Keshav Pingali
  • Paul Stodghill
Abstract

We describe how various sparse matrix and distribution formats can be handled using the relational approach to sparse matrix code compilation. This approach allows for the development of compilation techniques that are independent of the storage formats, by viewing the data structures as relations and abstracting the implementation details as access methods.

Introduction

Sparse matrix computations are at the core of many computational science algorithms. A typical application can often be separated into a discretization module, which translates a continuous problem (such as a system of differential equations) into a sequence of sparse matrix problems, and a solver module, which solves the matrix problems. Typically, the solver is the most time- and space-intensive part of an application, and quite naturally much effort, in both the numerical analysis and compilers communities, has been devoted to producing efficient parallel and sequential code for sparse matrix solvers.

There are two challenges in generating solver code that has to be interfaced with discretization systems. First, different discretization systems produce sparse matrices in many different formats; therefore, the compiler should be able to generate solver code for different storage formats. Second, some discretization systems partition the problem for parallel solution and use various methods for specifying the partitioning (distribution); therefore, a compiler should be able to produce parallel code for different distribution formats.

In our approach, the programmer writes programs as if all matrices were dense, and then provides a specification of which matrices are actually sparse and what formats (distributions) are used to represent them. The job of the compiler is the following: given a sequential dense matrix program, descriptions of sparse matrix formats, and data and computation distribution formats, generate parallel sparse SPMD code. We have introduced a relational algebra approach to solving this problem. In this approach, we view sparse matrices as database relations, sparse matrix formats as implementations of access methods to the relations, and execution of loop nests as evaluation of certain relational queries. The key operator in these queries turns out to be the relational join. For parallel execution, we view loop nests as distributed queries, and the process of generating SPMD node programs as the translation of distributed queries into equivalent local queries and communication statements. In this paper, we focus on how our compiler handles user-defined sparse data structures and distribution formats. The architecture of the compiler is illustrated in Figure .

(This research was supported by an NSF Presidential Young Investigator award CCR, NSF Grant CCR, and ONR grant N. A version of this report appears in the proceedings of the Eighth SIAM Conference on Parallel Processing for Scientific Computing.)
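To make the relational view concrete, here is a minimal sketch (not the authors' compiler output; the function name and data layout are illustrative assumptions): a sparse matrix is treated as a relation of (row, column, value) tuples, and a sparse matrix-vector product y = A*x becomes a join between the matrix relation and the vector relation on the column index, followed by a per-row sum.

```python
# Illustrative sketch of the relational view of sparse computation.
# A sparse matrix is a relation of (i, j, v) tuples; a vector is a
# relation mapping index j -> x[j]. SpMV is then a join on j plus
# an aggregation (sum) grouped by the row index i.

def spmv_as_join(A_rel, x_rel, n_rows):
    """A_rel: iterable of (i, j, v) nonzeros; x_rel: dict j -> x[j]."""
    y = [0.0] * n_rows
    for (i, j, v) in A_rel:      # enumerate the matrix relation
        if j in x_rel:           # join on the column index j
            y[i] += v * x_rel[j] # aggregate the products per row i
    return y

# Nonzeros of a 2x3 matrix and a dense vector viewed as a relation.
A = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0)]
x = {0: 1.0, 1: 2.0, 2: 4.0}
print(spmv_as_join(A, x, 2))  # [6.0, 6.0]
```

A concrete storage format (e.g. compressed sparse row) would supply this tuple enumeration as one of its access methods, which is what lets the generated code stay independent of the format.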


Related papers

Compiling Parallel Sparse Code for User-Defined Data Structures

We describe how various sparse matrix and distribution formats can be handled using the relational approach to sparse matrix code compilation. This approach allows for the development of compilation techniques that are independent of the storage formats by viewing the data structures as relations and abstracting the implementation details as access methods.


Athapascan-1: a Multithreaded Execution Model Based on Data Flow

We present Athapascan-1, a language implemented as a C++ library that enables the on-line computation of the macro-data flow derived from the data-dependencies of a parallel application. The parallelism is explicit but synchronization is implicit. The semantics of Athapascan-1 is data-driven; it is independent from the scheduling algorithm used to execute the application. The overhead introduced is ...


Compiling Data-Parallel Paradigms through

This paper presents a compiling technique to generate parallel code with explicit local communications for a mesh-connected distributed memory, MIMD architecture. Our compiling technique works for the geometric paradigm of parallel computation, i.e. a data-parallel paradigm where array data structures are partitioned and assigned to a set of processing nodes, which, to perform their identical t...


Compiling Imperfectly-nested Sparse Matrix Codes with Dependencies

We present compiler technology for generating sparse matrix code from (i) dense matrix code and (ii) a description of the indexing structure of the sparse matrices. This technology embeds statement instances into a Cartesian product of statement iteration and data spaces, and produces efficient sparse code by identifying common enumerations for multiple references to sparse matrices. This appro...


Compiling Geometric Paradigms through Local Communications

This paper presents a compiling technique to generate parallel code with explicit local communications for a mesh-connected distributed memory, MIMD architecture. Our compiling technique works for the geometric paradigm of parallel computation, i.e. a data-parallel paradigm where array data structures are partitioned and assigned to a set of processing nodes, which, to perform their identical t...



Journal title:

Volume   Issue

Pages  -

Publication date: 1997